1,483 research outputs found

    Activation functions, computational goals, and learning rules for local processors with contextual guidance

    Information about context can enable local processors to discover latent variables that are relevant to the context within which they occur, and it can also guide short-term processing. For example, Becker and Hinton (1992) have shown how context can guide learning, and Hummel and Biederman (1992) have shown how it can guide processing in a large neural net for object recognition. This article studies the basic capabilities of a local processor with two distinct classes of inputs: receptive field inputs that provide the primary drive, and contextual inputs that modulate their effects. For contextual predictions to guide processing without being confused with receptive field inputs, the processor's transfer function must distinguish these two roles. Given these two classes of input, the information in the output can be decomposed into four disjoint components, providing a space of possible goals in which the unsupervised learning of Linsker (1988) and the internally supervised learning of Becker and Hinton (1992) are special cases. Learning rules are derived from an information-theoretic objective function, and simulations show that a local processor trained with these rules and using an appropriate activation function has the elementary properties required.
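
    As a concrete illustration of the modulatory role described above, here is a minimal numerical sketch of a two-input transfer function. The specific functional form and the parameter k are assumptions chosen for illustration, not necessarily the function derived in the article; the point is only that context rescales the receptive-field drive without being able to drive the output on its own.

```python
import numpy as np

def contextual_activation(r, c, k=0.5):
    """Illustrative (assumed) two-input transfer function: the
    receptive-field drive r sets the sign and basic magnitude of the
    output, while the contextual input c only modulates its gain."""
    # With r = 0 the output is 0 regardless of c, so context
    # modulates processing but cannot drive it -- the key property
    # the abstract attributes to the processor's transfer function.
    return r * (k + (1 - k) / (1 + np.exp(-r * c)))

r = np.linspace(-3, 3, 7)
for c in (-2.0, 0.0, 2.0):
    print(f"c={c:+.1f}:", np.round(contextual_activation(r, c), 3))
```

    Context that agrees in sign with the drive (r*c > 0) pushes the gain toward 1, while disagreeing context pushes it toward k, so the sign of the output is always set by the receptive field.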

    Finding The Beta For A Portfolio Isn't Obvious: An Educational Example

    When a portfolio is not actively managed to maintain a fixed investment percentage in each asset, but instead holds a fixed number of shares of each asset, the portfolio weights change over time because the assets' market returns differ. Consequently, a portfolio beta computed as a linear combination of asset betas, which is the usual practice, will differ from a beta computed by regressing portfolio returns on market returns, as is done when evaluating individual assets and mutual funds. The two approaches can yield quite different beta statistics and, consequently, inconsistent decisions depending on which method is used.
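
    The discrepancy is easy to reproduce. The sketch below uses synthetic monthly returns for two assets (the betas, sample size, and share counts are all illustrative assumptions) and compares the usual weighted-average beta against a beta obtained by regressing realized buy-and-hold portfolio returns on the market:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 60                                    # months of synthetic data
rm = rng.normal(0.01, 0.04, T)            # market returns (illustrative)
true_betas = np.array([0.5, 1.5])         # two assets with different betas
ra = true_betas[:, None] * rm + rng.normal(0, 0.02, (2, T))

# Buy-and-hold: share counts are fixed, so weights drift with prices.
prices = np.cumprod(1 + ra, axis=1)       # both assets start at $1
shares = np.array([100.0, 100.0])
port_val = (shares[:, None] * prices).sum(axis=0)
rp = port_val[1:] / port_val[:-1] - 1     # realized portfolio returns

def beta(x, m):
    return np.cov(x, m)[0, 1] / np.var(m, ddof=1)

# (1) Usual practice: linear combination of asset betas at initial weights.
w0 = shares / shares.sum()                # $1 start prices -> 50/50 weights
beta_weighted = w0 @ np.array([beta(ra[i], rm) for i in range(2)])

# (2) Regression of realized portfolio returns on the market.
beta_regressed = beta(rp, rm[1:])

print(f"weighted-average beta: {beta_weighted:.3f}")
print(f"regression beta:       {beta_regressed:.3f}")
```

    Because the high-beta asset tends to grow faster, its weight drifts upward, and the regression beta of the buy-and-hold portfolio moves away from the initial weighted average.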

    Why Downside Beta Is Better: An Educational Example

    An educational example is presented that helps students understand the difference between the traditional CAPM beta and downside (or down-market) beta, and why downside beta is a superior measure for use in personal financial planning investment policy statements.
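
    As a rough sketch of the distinction, the snippet below computes a traditional beta and one common variant of downside beta, conditioning on periods when the market return is below its mean; other thresholds (zero, the risk-free rate) appear in the literature, and the data here are synthetic:

```python
import numpy as np

def downside_beta(ra, rm, threshold=None):
    """Downside (down-market) beta: co-movement with the market over
    periods when the market is below a threshold (its mean by default).
    One common variant, used here purely for illustration."""
    t = rm.mean() if threshold is None else threshold
    down = rm < t
    return np.cov(ra[down], rm[down])[0, 1] / np.var(rm[down], ddof=1)

rng = np.random.default_rng(1)
rm = rng.normal(0.008, 0.045, 120)        # synthetic market returns
# An asset that is defensive in up markets but sensitive in down markets.
ra = np.where(rm < 0, 1.6 * rm, 0.6 * rm) + rng.normal(0, 0.01, 120)

full_beta = np.cov(ra, rm)[0, 1] / np.var(rm, ddof=1)
print(f"traditional beta: {full_beta:.2f}")
print(f"downside beta:    {downside_beta(ra, rm):.2f}")
```

    The traditional beta averages the two regimes together and so understates the risk that matters most to a planning client: losses when the market falls.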

    In memoriam: Jasper Daube, MD

    Peer reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/156241/2/mus26916_am.pdf

    Asset Attribution Stability And Portfolio Construction: An Educational Example

    This paper illustrates how a third statistic from asset pricing models, the R-squared statistic, may contain information useful in portfolio construction. A portfolio separation test is conducted, comparing a traditional CAPM model with an 18-factor Arbitrage Pricing Style Model. Portfolio returns and risk metrics are compared using data on the Dow Jones 30 stocks over the period January 2007 through October 2013. Various teaching points are discussed and illustrated.
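
    For readers who want to reproduce the basic ingredient, here is a minimal sketch of extracting alpha, beta, and R-squared from a single-factor CAPM regression. The data and stock labels are synthetic, and the paper's actual separation test (CAPM versus an 18-factor model over the Dow 30) is more involved:

```python
import numpy as np

def capm_stats(ra, rm):
    """OLS of asset returns on market returns -> (alpha, beta, R^2)."""
    X = np.column_stack([np.ones_like(rm), rm])
    coef, *_ = np.linalg.lstsq(X, ra, rcond=None)
    resid = ra - X @ coef
    r2 = 1 - resid @ resid / np.sum((ra - ra.mean()) ** 2)
    return coef[0], coef[1], r2

rng = np.random.default_rng(2)
rm = rng.normal(0.01, 0.04, 82)   # ~82 months, as in Jan 2007 - Oct 2013
for name, noise in [("high-R2 stock", 0.01), ("low-R2 stock", 0.06)]:
    ra = 1.0 * rm + rng.normal(0, noise, 82)
    a, b, r2 = capm_stats(ra, rm)
    print(f"{name}: beta={b:.2f}, R^2={r2:.2f}")
```

    Two stocks with the same beta can have very different R-squared values, which is exactly the extra dimension the paper proposes to exploit when forming portfolios.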

    Flight Investigation of the Effectiveness of an Automatic Aileron Trim Control Device for Personal Airplanes

    A flight investigation has been conducted to determine the effectiveness of an automatic aileron trim control device installed in a personal airplane to augment the apparent spiral stability. The device uses a rate-gyro sensing element to switch an on-off type of control that operates the ailerons at a fixed rate through control centering springs. An analytical study using phase-plane and analog-computer methods has been carried out to determine a desirable method of operation for the automatic trim control.
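
    A toy simulation of this kind of on-off (bang-bang) trim logic is sketched below. The one-state spiral-mode model and every parameter value are illustrative assumptions, not figures from the report:

```python
import numpy as np

dt, T = 0.02, 60.0     # time step and duration (s)
spiral = 0.05          # 1/s; positive => slowly divergent spiral mode
k_yaw = 0.15           # yaw rate per unit bank angle (steady-turn approx.)
deadband = 0.01        # rad/s gyro threshold for switching the servo
servo_rate = 0.02      # rad/s fixed aileron actuation rate
k_ail = 0.8            # bank-angle response to aileron deflection

phi, delta_a = 0.1, 0.0            # initial bank angle and aileron (rad)
for _ in range(int(T / dt)):
    yaw_rate = k_yaw * phi
    # On-off logic: the rate gyro switches the servo, which drives the
    # ailerons at a fixed rate; centering springs return them to neutral
    # inside the deadband.
    if yaw_rate > deadband:
        delta_a -= servo_rate * dt
    elif yaw_rate < -deadband:
        delta_a += servo_rate * dt
    else:
        delta_a -= np.sign(delta_a) * min(servo_rate * dt, abs(delta_a))
    phi += (spiral * phi + k_ail * delta_a) * dt

print(f"final bank angle: {phi:.4f} rad, aileron: {delta_a:.4f} rad")
```

    Even though the uncontrolled spiral mode diverges, the fixed-rate on-off corrections hold the bank angle near wings level in a small limit cycle, which is the qualitative behavior a phase-plane analysis of such a device is used to design for.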

    Nonlinear computations in spiking neural networks through multiplicative synapses

    The brain efficiently performs nonlinear computations through its intricate networks of spiking neurons, but how this is done remains elusive. While nonlinear computations can be implemented successfully in spiking neural networks, this requires supervised training and the resulting connectivity can be hard to interpret. In contrast, the required connectivity for any computation in the form of a linear dynamical system can be directly derived and understood with the spike coding network (SCN) framework. These networks also have biologically realistic activity patterns and are highly robust to cell death. Here we extend the SCN framework to directly implement any polynomial dynamical system, without the need for training. This results in networks requiring a mix of synapse types (fast, slow, and multiplicative), which we term multiplicative spike coding networks (mSCNs). Using mSCNs, we demonstrate how to directly derive the required connectivity for several nonlinear dynamical systems. We also show how to implement higher-order polynomial dynamics with coupled networks that use only pairwise multiplicative synapses, and provide expected numbers of connections for each synapse type. Overall, our work demonstrates a novel method for implementing nonlinear computations in spiking neural networks, while keeping the attractive features of standard SCNs (robustness, realistic activity patterns, and interpretable connectivity). Finally, we discuss the biological plausibility of our approach, and how its high accuracy and robustness may be of interest for neuromorphic computing.
    Comment: This article has been peer-reviewed and recommended by Peer Community In Neuroscience.
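
    For orientation, the sketch below shows the connectivity derivation of the standard SCN framework (after Boerlin, Machens, and Denève, 2013) on which the mSCN extension builds; the network size, decay rate, and example dynamics are illustrative assumptions, and the multiplicative synapses themselves are only noted in a comment:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 20, 2                        # neurons, latent dimensions
lam = 1.0                           # readout decay rate (assumed)
D = rng.normal(0, 1 / np.sqrt(N), (K, N))  # decoding weights, x_hat = D s
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])         # target linear dynamics: x' = A x

# Standard SCN recipe: the recurrent weights follow directly from D and A,
# with no training required.
Omega_fast = -D.T @ D                         # fast synapses: error correction
Omega_slow = D.T @ (A + lam * np.eye(K)) @ D  # slow synapses: implement A

# The mSCN extension adds a third, multiplicative synapse type so that
# quadratic terms of x (and, via coupled networks, higher orders) can be
# derived in the same direct way.
print("fast connectivity:", Omega_fast.shape)
print("slow connectivity:", Omega_slow.shape)
```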